
    Astrophysically robust systematics removal using variational inference: application to the first month of Kepler data

    Space-based transit search missions such as Kepler are collecting large numbers of stellar light curves of unprecedented photometric precision and time coverage. However, before this scientific goldmine can be exploited fully, the data must be cleaned of instrumental artefacts. We present a new method to correct common-mode systematics in large ensembles of very high precision light curves. It is based on a Bayesian linear basis model and uses shrinkage priors for robustness, variational inference for speed, and a de-noising step based on empirical mode decomposition to prevent the introduction of spurious noise into the corrected light curves. After demonstrating the performance of our method on a synthetic dataset, we apply it to the first month of Kepler data. We compare the results to the output of the Kepler pipeline's pre-search data conditioning, and show that the two generally give similar results, but the light curves corrected using our approach have lower scatter, on average, on both long and short timescales. We finish by discussing some limitations of our method and outlining some avenues for further development. The trend-corrected data produced by our approach are publicly available. Comment: 15 pages, 13 figures, accepted for publication in MNRAS
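
    The core of the method is a Bayesian linear basis model with shrinkage priors on the weights, fitted independently to each light curve. A minimal sketch of that idea is below, using the leading singular vectors of the ensemble as the basis trends and scikit-learn's ARDRegression as a stand-in for the shrinkage-prior fit; the paper's variational inference and empirical-mode-decomposition de-noising steps are not reproduced, and all data and parameter choices here are illustrative.

```python
import numpy as np
from sklearn.linear_model import ARDRegression  # stand-in for a shrinkage-prior Bayesian fit


def correct_common_mode(flux, n_basis=4):
    """Remove common-mode trends from an ensemble of light curves.

    flux: (n_stars, n_times) array of normalised fluxes.
    Returns the ensemble with the fitted systematics subtracted.
    """
    # Basis trends: leading right singular vectors of the mean-subtracted ensemble
    resid = flux - flux.mean(axis=1, keepdims=True)
    _, _, vt = np.linalg.svd(resid, full_matrices=False)
    basis = vt[:n_basis].T                       # (n_times, n_basis)

    corrected = np.empty_like(flux)
    for i, y in enumerate(flux):
        reg = ARDRegression()                    # sparsity-promoting prior on the weights
        reg.fit(basis, y - y.mean())             # fit the common-mode trend to this star
        corrected[i] = y - reg.predict(basis)    # subtract only the fitted systematics
    return corrected


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.linspace(0, 30, 500)                  # roughly one month of synthetic cadences
    trend = 0.01 * np.sin(2 * np.pi * t / 7.0)   # shared instrumental trend
    flux = 1.0 + trend * rng.uniform(0.5, 2.0, (50, 1)) + 1e-3 * rng.standard_normal((50, 500))
    corrected = correct_common_mode(flux)
    print("mean scatter before:", flux.std(axis=1).mean(), "after:", corrected.std(axis=1).mean())
```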

    Anomaly Detection and Removal Using Non-Stationary Gaussian Processes

    This paper proposes a novel Gaussian process approach to fault removal in time-series data. Fault removal does not delete the faulty signal data but, instead, massages the fault from the data. We assume that only one fault occurs at any one time and model the signal by two separate non-parametric Gaussian process models, one for the physical phenomenon and one for the fault. In order to facilitate fault removal, we introduce the Markov Region Link kernel for handling non-stationary Gaussian processes. This kernel is piece-wise stationary but guarantees that functions generated by it, and their derivatives (when required), are everywhere continuous. We apply this kernel to the removal of drift and bias errors in faulty sensor data, and also to the recovery of EEG signals corrupted by EOG artifacts. Comment: 9 pages, 14 figures
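
    The construction treats the observed signal as the sum of two Gaussian processes, one for the physical phenomenon and one for the fault, and recovers the fault's posterior mean so that it can be subtracted rather than deleted. The sketch below illustrates that additive decomposition with a simple windowed kernel standing in for the paper's Markov Region Link kernel (so continuity across the region boundary is not enforced); the kernel parameters and the fault window are assumptions made for illustration.

```python
import numpy as np


def rbf(x1, x2, ell=1.0, var=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = x1[:, None] - x2[None, :]
    return var * np.exp(-0.5 * (d / ell) ** 2)


def fault_kernel(x1, x2, window, ell=0.5, var=1.0):
    """Fault covariance, non-zero only when both inputs lie inside the fault window.

    A simple piece-wise stationary stand-in for the Markov Region Link kernel;
    unlike the original construction it does not guarantee boundary continuity.
    """
    in1 = ((x1 >= window[0]) & (x1 <= window[1])).astype(float)
    in2 = ((x2 >= window[0]) & (x2 <= window[1])).astype(float)
    return rbf(x1, x2, ell, var) * np.outer(in1, in2)


def remove_fault(x, y, window, noise=0.05):
    """Subtract the posterior mean of the fault component under the additive GP model."""
    K_sig = rbf(x, x, ell=2.0, var=1.0)
    K_flt = fault_kernel(x, x, window, ell=0.5, var=4.0)
    K = K_sig + K_flt + noise ** 2 * np.eye(len(x))
    alpha = np.linalg.solve(K, y)
    fault_mean = K_flt @ alpha          # E[fault | data]
    return y - fault_mean, fault_mean


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = np.linspace(0, 10, 200)
    y = np.sin(x) + 0.05 * rng.standard_normal(200)
    y[(x > 4) & (x < 6)] += 1.5         # injected bias fault
    cleaned, fault = remove_fault(x, y, window=(4.0, 6.0))
    print("max recovered fault:", fault.max())
```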

    A Multi-Dimensional Trust Model for Heterogeneous Contract Observations

    In this paper we develop a novel probabilistic model of computational trust that allows agents to exchange and combine reputation reports over heterogeneous, correlated multi-dimensional contracts. We consider the specific case of an agent attempting to procure a bundle of services that are subject to correlated quality-of-service failures (e.g. due to use of shared resources or infrastructure), and where the direct experience of other agents within the system consists of contracts over different combinations of these services. To this end, we present a formalism based on the Kalman filter that represents trust as a vector estimate of the probability that each service will be successfully delivered, and a covariance matrix that describes the uncertainty in, and correlations between, these probabilities. We describe how the agents' direct experiences of contract outcomes can be represented and combined within this formalism, and we empirically demonstrate that our formalism provides significantly better trustworthiness estimates than the alternative of using separate single-dimensional trust models for each service (where information regarding the correlations between the estimates is lost).
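
    Because trust is represented as a mean vector of per-service success probabilities together with a full covariance matrix, reputation reports that cover only a subset of the services can be fused with standard Kalman filter updates. A minimal sketch under that reading is below; the observation model, noise levels, and class and parameter names are illustrative rather than taken from the paper.

```python
import numpy as np


class TrustFilter:
    """Minimal Kalman-filter trust model over correlated services.

    State: a vector of estimated success probabilities, one per service, plus a
    covariance matrix capturing uncertainty and inter-service correlations.
    An observation is a reputation report giving noisy success rates for the
    subset of services selected by the matrix H. All names are illustrative.
    """

    def __init__(self, n_services, prior_mean=0.5, prior_var=0.25, corr=0.1):
        self.mean = np.full(n_services, prior_mean)
        self.cov = prior_var * np.eye(n_services) + corr * (1 - np.eye(n_services))

    def update(self, H, z, obs_var=0.05):
        """Fuse a report z covering the services picked out by the rows of H."""
        R = obs_var * np.eye(len(z))
        S = H @ self.cov @ H.T + R                       # innovation covariance
        K = self.cov @ H.T @ np.linalg.inv(S)            # Kalman gain
        self.mean = self.mean + K @ (z - H @ self.mean)  # note: not clipped to [0, 1] here
        self.cov = (np.eye(len(self.mean)) - K @ H) @ self.cov
        return self.mean


if __name__ == "__main__":
    tf = TrustFilter(n_services=3)
    # A report about services 0 and 2 only: they succeeded 90% and 40% of the time.
    H = np.array([[1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0]])
    print(tf.update(H, np.array([0.9, 0.4])))  # service 1 shifts too, via the correlations
```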

    Efficient State-Space Inference of Periodic Latent Force Models

    Latent force models (LFMs) are principled approaches to incorporating solutions to differential equations within non-parametric inference methods. Unfortunately, the development and application of LFMs can be inhibited by their computational cost, especially when closed-form solutions for the LFM are unavailable, as is the case in many real-world problems where the latent forces exhibit periodic behaviour. Given this, we develop a new sparse representation of LFMs which considerably improves their computational efficiency, as well as broadening their applicability, in a principled way, to domains with periodic or near-periodic latent forces. Our approach uses a linear basis model to approximate one generative model for each periodic force. We assume that the latent forces are generated from Gaussian process priors and develop a linear basis model which fully expresses these priors. We apply our approach to model the thermal dynamics of domestic buildings and show that it is effective at predicting day-ahead temperatures within the homes. We also apply our approach within queueing theory, in which quasi-periodic arrival rates are modelled as latent forces. In both cases, we demonstrate that our approach can be implemented efficiently using state-space methods which encode the linear dynamic systems via LFMs. Further, we show that state estimates obtained using periodic latent force models can reduce the root mean squared error to 17% of that from non-periodic models and 27% of that from the nearest rival approach, the resonator model. Comment: 61 pages, 13 figures, accepted for publication in JMLR. Updates from the earlier version occur throughout the article in response to the JMLR review.
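
    The key construction is a linear basis model whose weight priors reproduce a periodic Gaussian process prior over the latent force, which can then be run efficiently in state-space form. The batch-form sketch below uses a truncated Fourier basis with geometrically decaying prior variances as a hedged stand-in for the paper's exact basis construction, applied to a toy day-ahead temperature prediction; the Kalman-filter state-space implementation and all numerical settings here are assumptions for illustration.

```python
import numpy as np


def fourier_basis(t, period, n_harmonics):
    """Constant plus sin/cos features at harmonics of the fundamental period."""
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        w = 2 * np.pi * k / period
        cols += [np.sin(w * t), np.cos(w * t)]
    return np.stack(cols, axis=1)


def fit_periodic_force(t, y, period=24.0, n_harmonics=5, noise=0.1):
    """Gaussian posterior over the basis weights of a periodic latent force.

    The prior variances on the harmonics decay geometrically here; in the paper
    they are chosen so the basis model reproduces a periodic GP prior exactly.
    """
    Phi = fourier_basis(t, period, n_harmonics)
    prior_var = np.concatenate(([10.0], np.repeat(1.0 / 2 ** np.arange(n_harmonics), 2)))
    A = Phi.T @ Phi / noise ** 2 + np.diag(1.0 / prior_var)
    post_cov = np.linalg.inv(A)
    post_mean = post_cov @ Phi.T @ y / noise ** 2
    return post_mean, post_cov


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    t = np.arange(0.0, 72.0, 0.5)                       # three days, half-hourly
    temp = 18 + 3 * np.sin(2 * np.pi * t / 24.0 - 1.0) + 0.3 * rng.standard_normal(t.size)
    w, _ = fit_periodic_force(t[:96], temp[:96])        # train on the first two days
    pred = fourier_basis(t[96:], 24.0, 5) @ w           # predict day-ahead temperatures
    print("day-ahead RMSE:", np.sqrt(np.mean((pred - temp[96:]) ** 2)))
```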

    On Similarities between Inference in Game Theory and Machine Learning

    In this paper, we elucidate the equivalence between inference in game theory and machine learning. Our aim in so doing is to establish an equivalent vocabulary between the two domains so as to facilitate developments at the intersection of both fields and, as proof of the usefulness of this approach, we use recent developments in each field to make useful improvements to the other. More specifically, we consider the analogies between smooth best responses in fictitious play and Bayesian inference methods. Initially, we use these insights to develop and demonstrate an improved algorithm for learning in games based on probabilistic moderation. That is, by integrating over the distribution of opponent strategies (a Bayesian approach within machine learning) rather than taking a simple empirical average (the approach used in standard fictitious play), we derive a novel moderated fictitious play algorithm and show that it is more likely than standard fictitious play to converge to a payoff-dominant but risk-dominated Nash equilibrium in a simple coordination game. Furthermore, we consider the converse case, and show how insights from game theory can be used to derive two improved mean field variational learning algorithms. We first show that the standard update rule of mean field variational learning is analogous to a Cournot adjustment within game theory. By analogy with fictitious play, we then suggest an improved update rule, and show that this results in fictitious variational play, an improved mean field variational learning algorithm that exhibits better convergence in highly connected graphical models. Second, we use a recent advance in fictitious play, namely dynamic fictitious play, to derive a derivative action variational learning algorithm that exhibits superior convergence properties on a canonical machine learning problem (clustering a mixture distribution).
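
    The contrast between standard and moderated fictitious play comes down to how the opponent model is used: best-responding to the empirical average of past play versus integrating the best response over a posterior on the opponent's mixed strategy. The sketch below illustrates that difference on a stag-hunt style coordination game, using Monte Carlo draws from a Dirichlet posterior for the moderation step; the payoffs, sample counts, and function names are assumptions made for illustration, not the paper's experimental setup.

```python
import numpy as np

# Stag-hunt style coordination game (row player's payoffs): (Stag, Stag) is the
# payoff-dominant equilibrium, (Hare, Hare) the risk-dominant one. 0 = Stag, 1 = Hare.
PAYOFF = np.array([[4.0, 0.0],
                   [3.0, 3.0]])


def standard_fp(counts, rng):
    """Best response to the empirical average of the opponent's past play."""
    p = counts / counts.sum()
    return int(np.argmax(PAYOFF @ p))


def moderated_fp(counts, rng, n_samples=200):
    """Probabilistic moderation: integrate the best response over a Dirichlet
    posterior on the opponent's mixed strategy (approximated by Monte Carlo)."""
    draws = rng.dirichlet(counts, size=n_samples)     # posterior samples of the opponent mix
    best = np.argmax(draws @ PAYOFF.T, axis=1)        # best response for each sample
    probs = np.bincount(best, minlength=2) / n_samples
    return int(rng.choice(2, p=probs))                # play each action with its moderation weight


def play(policy, rounds=200, seed=3):
    rng = np.random.default_rng(seed)
    counts = [np.ones(2), np.ones(2)]                 # Dirichlet(1, 1) priors over opponent actions
    joint = None
    for _ in range(rounds):
        a0, a1 = policy(counts[0], rng), policy(counts[1], rng)
        counts[0][a1] += 1                            # each player observes the other's action
        counts[1][a0] += 1
        joint = (a0, a1)
    return joint


if __name__ == "__main__":
    print("standard fictitious play, final joint action:", play(standard_fp))
    print("moderated fictitious play, final joint action:", play(moderated_fp))
```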